adversarial divergence


Approximation and Convergence Properties of Generative Adversarial Learning

Liu, Shuang, Bousquet, Olivier, Chaudhuri, Kamalika

Neural Information Processing Systems

Despite the empirical success of generative adversarial networks (GANs), two very basic questions about how well they can approximate the target distribution remain unanswered. First, it is not known how restricting the discriminator family affects the approximation quality. Second, while a number of different objective functions have been proposed, we do not understand when convergence to the global minimum of the objective function leads to convergence to the target distribution under various notions of distributional convergence. In this paper, we address these questions in a broad and unified setting by defining a notion of adversarial divergences that includes a number of recently proposed objective functions. We show that if the objective function is an adversarial divergence satisfying some additional conditions, then using a restricted discriminator family has a moment-matching effect. Additionally, we show that for objective functions that are strict adversarial divergences, convergence in the objective function implies weak convergence, thus generalizing previous results.
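
For orientation, the sup-over-discriminators form that such adversarial divergences take can be sketched as follows. The notation is schematic and ours, not copied from the paper, which spells out the exact structural conditions on the function class F:

% Schematic adversarial divergence between a target distribution mu
% and a model distribution nu. F is the function class induced by the
% (possibly restricted) discriminator family; the precise conditions
% on F are given in the paper, not here.
\[
  \tau(\mu \,\|\, \nu) \;=\; \sup_{f \in \mathcal{F}}\;
  \mathbb{E}_{(x, y) \sim \mu \otimes \nu}\big[\, f(x, y) \,\big]
\]

Restricting the discriminator family shrinks F, which is what drives the moment-matching effect described in the abstract.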


Generalization Properties of Optimal Transport GANs with Latent Distribution Learning

Luise, Giulia, Pontil, Massimiliano, Ciliberto, Carlo

arXiv.org Machine Learning

Generative Adversarial Networks (GANs) are powerful methods for learning probability measures and performing realistic sampling [21]. Algorithms in this class aim to reproduce the sampling behavior of the target distribution, rather than explicitly fitting a density function. This is done by modeling the target probability as the pushforward of a probability measure in a latent space. Since their introduction, GANs have achieved remarkable progress. From a practical perspective, a large number of model architectures have been explored, leading to impressive results in data generation [48, 24, 26]. On the theoretical side, attention has been devoted to identifying rich metrics for generator training, such as f-divergences [36], integral probability metrics (IPMs) [16], and optimal transport distances [3], as well as to studying their approximation properties [28, 5, 51]. From the statistical perspective, progress has been slower. While recent work has taken the first steps towards a characterization of the generalization properties of GANs with IPM loss functions [4, 51, 27, 41, 45], a full theoretical understanding of the main building blocks of the GAN framework is still missing. In this work, we focus on optimal transport-based loss functions [20] and study the impact of two key quantities of the GAN paradigm on the overall generalization performance.
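
The pushforward construction described here is easy to make concrete. Below is a minimal Python sketch; the toy one-dimensional setting and all names are our own rather than from the paper. A generator pushes a Gaussian latent sample forward, and the fit to the target sample is scored with the closed-form one-dimensional Wasserstein-1 distance between sorted samples.

import numpy as np

rng = np.random.default_rng(0)

def generator(z, theta):
    # Pushforward map: an affine g_theta applied to latent samples z.
    # A real GAN generator would be a neural network; affine keeps the
    # sketch self-contained.
    scale, shift = theta
    return scale * z + shift

def w1_1d(x, y):
    # In one dimension, the Wasserstein-1 distance between two empirical
    # measures with the same number of atoms is the mean absolute
    # difference of the sorted samples.
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

# Target: samples from N(2, 0.5^2). The model only ever sees these samples.
target = rng.normal(loc=2.0, scale=0.5, size=2048)

# Model: pushforward of a standard Gaussian latent distribution.
z = rng.standard_normal(2048)
theta = (1.0, 0.0)                      # initial (scale, shift)
best = w1_1d(generator(z, theta), target)
print("W1 before fitting:", round(best, 4))

# Crude random-search "training" loop, just to show the loss in use.
for _ in range(500):
    cand = (theta[0] + 0.1 * rng.standard_normal(),
            theta[1] + 0.1 * rng.standard_normal())
    loss = w1_1d(generator(z, cand), target)
    if loss < best:
        theta, best = cand, loss
print("W1 after fitting:", round(best, 4), "theta:", theta)

Actual OT-GAN training replaces the random search with gradient-based optimization and the sorted-sample distance with a general (often entropically regularized) OT solver; the sketch only illustrates the pushforward-plus-transport-loss structure.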


First Order Generative Adversarial Networks

Seward, Calvin, Unterthiner, Thomas, Bergmann, Urs, Jetchev, Nikolay, Hochreiter, Sepp

arXiv.org Machine Learning

GANs excel at learning high-dimensional distributions, but they can update generator parameters in directions that do not correspond to the steepest-descent direction of the objective. Prominent examples of problematic update directions include those used in both Goodfellow's original GAN and the WGAN-GP. To formally describe an optimal update direction, we introduce a theoretical framework which allows the derivation of requirements on both the divergence and the corresponding method for determining an update direction. These requirements guarantee unbiased mini-batch updates in the direction of steepest descent. We propose a novel divergence which approximates the Wasserstein distance while regularizing the critic's first-order information. Together with an accompanying update direction, this divergence fulfills the requirements for unbiased steepest-descent updates. We verify our method, the First Order GAN, with CelebA image generation and set a new state of the art on the One Billion Word language generation task. Code to reproduce experiments is available at https://github.com/
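
Since WGAN-GP is named as one of the problematic baselines, a short PyTorch sketch of its standard gradient penalty is a useful reference point. Note that this is the baseline penalty being critiqued, not the paper's proposed divergence, and the tensor shapes and names are our own.

import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # Standard WGAN-GP penalty: evaluate the critic on random
    # interpolates between real and fake batches and penalize the
    # deviation of the gradient norm from 1.
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores.sum(), inputs=interp, create_graph=True
    )[0]
    return lam * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Toy usage with a linear critic on 2-D data.
critic = torch.nn.Linear(2, 1)
real = torch.randn(64, 2)
fake = torch.randn(64, 2)
loss = (critic(fake).mean() - critic(real).mean()
        + gradient_penalty(critic, real, fake))
loss.backward()

The paper's argument is that mini-batch gradients of objectives like this one can be biased relative to the true steepest-descent direction; its proposed divergence and update direction are constructed so that mini-batch updates remain unbiased.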

